# Lightweight ViT

## OpenVision ViT Base Patch8 160

*Apache-2.0 · UCSC-VLAA · Image Classification · Transformers · Downloads: 26 · Likes: 0*

OpenVision ViT is a fully open, cost-effective visual encoder in the OpenVision family, designed for multimodal learning.

## LeViT 128.fb Dist In1k Finetuned Stroke Binary

*Apache-2.0 · BTX24 · Image Classification · Transformers · Downloads: 18 · Likes: 1*

A Vision Transformer based on the LeViT-128 architecture, fine-tuned for binary stroke detection.

## H0 Mini

*bioptimus · Image Classification · Downloads: 89 · Likes: 3*

H0-mini is a lightweight histology foundation model jointly developed by Owkin and Bioptimus. It is based on the Vision Transformer architecture, trained via self-supervised distillation, and suited to pathology image analysis.

## AI Image Detect Distilled

*MIT · jacoballessio · Image Classification · Transformers · Downloads: 7,054 · Likes: 2*

A lightweight ViT-based image classifier designed to distinguish AI-generated images from real ones.

## ViT Tiny Patch8 112.arcface Ms1mv3

*gaunernst · Face-related · Downloads: 371 · Likes: 1*

A Vision Transformer (ViT) trained on the MS1MV3 dataset with the ArcFace loss, designed for face recognition.

## ViT Tiny Patch8 112.cosface Ms1mv3

*gaunernst · Face-related · Downloads: 28 · Likes: 0*

A Vision Transformer (ViT) trained on the MS1MV3 dataset with the CosFace loss, designed for face recognition.

## DINOv2 Small Imagenet1k 1 Layer

*Apache-2.0 · facebook · Image Classification · Transformers · Downloads: 50.86k · Likes: 2*

A small Vision Transformer trained with the DINOv2 method, suitable for image feature extraction and classification.

## MobileViT Small 10k Steps

*Other · Efferbach · Image Segmentation · Transformers · Downloads: 13 · Likes: 0*

A fine-tuned version of apple/deeplabv3-mobilevit-small, trained on the Efferbach/lane_master2 dataset for image segmentation.

## PaViT

*MIT · Ajibola · Image Classification · Supports Multiple Languages · Downloads: 20 · Likes: 2*

PaViT is an image recognition model based on the Pathway Vision Transformer and inspired by Google's PaLM, focused on applying few-shot learning techniques to image recognition.

## DeiT Tiny Distilled Patch16 224

*Apache-2.0 · facebook · Image Classification · Transformers · Downloads: 6,016 · Likes: 6*

A distilled Data-efficient image Transformer (DeiT), pretrained and fine-tuned on ImageNet-1k at 224x224 resolution; it learns efficiently from a teacher model through distillation.
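Several of the models above encode their input geometry in their names: patch size and input resolution (e.g. "Patch8 160", "Patch8 112", "Patch16 224"). The resulting token-sequence length is what makes these models lightweight or heavy. A minimal sketch of that arithmetic, assuming square inputs, non-overlapping patches, and one prepended [CLS] token (typical for ViT/DeiT variants; the function name is illustrative, not from any of the libraries above):

```python
def vit_token_count(image_size: int, patch_size: int, cls_token: bool = True) -> int:
    """Number of tokens a ViT processes for a square input image.

    The image is split into non-overlapping patch_size x patch_size
    patches; most ViT variants prepend a single [CLS] token.
    """
    if image_size % patch_size != 0:
        raise ValueError("image size must be divisible by patch size")
    patches_per_side = image_size // patch_size
    return patches_per_side ** 2 + (1 if cls_token else 0)

# Geometry implied by the model names above:
print(vit_token_count(160, 8))   # patch8 @ 160: 20*20 patches + CLS -> 401
print(vit_token_count(112, 8))   # patch8 @ 112: 14*14 patches + CLS -> 197
print(vit_token_count(224, 16))  # patch16 @ 224: 14*14 patches + CLS -> 197
```

Note that attention cost grows quadratically with this token count, which is why a patch-8 model at 160px (401 tokens) is markedly more expensive than a patch-16 model at 224px (197 tokens) despite the smaller image.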
© 2025 AIbase